DAW: Exploring the Better Weighting Function for Semi-supervised Semantic Segmentation (Supplementary Material). Rui Sun, Huayu Mai

Neural Information Processing Systems

In the supplementary material, we first introduce the pseudo-algorithm of DAW. We then provide a more detailed explanation of Figures 1, 2, 4, and 5, which are slightly abbreviated due to the limited space of the main paper. In the naive pseudo-labeling method, all pseudo-labels are enrolled into training, i.e., E1 + E2; this is guaranteed by the theoretical functional analysis in the next section, and Inequality 45 holds at all times. Finally, we provide more qualitative comparisons between our method and other competitors.




A. Attribution Methods for Concepts

Neural Information Processing Systems

In our case, it boils down to: the smoothing effect induced by the average helps to reduce the visual noise, and hence improves the explanations. For the experiment, m and the noise level are the same as in SmoothGrad. We start by deriving the closed form of Saliency (SA) and, naturally, of Gradient-Input (GI). The case of VarGrad is specific: since the gradient of a linear system is constant, its variance is null. We recall that for Gradient-Input, Integrated Gradients, and Occlusion, … It was quickly realized that they unified properties of various domains such as graph theory, linear algebra, and geometry; later, in the '60s, a connection was made … At each step, the insertion metric selects the concepts of maximum score subject to a cardinality constraint.
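To make the VarGrad claim concrete, here is a minimal sketch, assuming a linear model f(x) = w·x whose gradient is the constant vector w: the SmoothGrad average then recovers w, while the VarGrad variance is exactly zero. The function names and parameters (`m`, `sigma`) are illustrative, not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, -2.0, 1.0])   # linear model f(x) = w @ x
grad = lambda x: w               # its gradient is constant in x

def smoothgrad(grad, x, m=50, sigma=0.1):
    """Average of gradients over m Gaussian-perturbed inputs."""
    g = np.stack([grad(x + rng.normal(0.0, sigma, x.shape)) for _ in range(m)])
    return g.mean(axis=0)

def vargrad(grad, x, m=50, sigma=0.1):
    """Variance of gradients over m Gaussian-perturbed inputs."""
    g = np.stack([grad(x + rng.normal(0.0, sigma, x.shape)) for _ in range(m)])
    return g.var(axis=0)

x = np.array([1.0, 2.0, 3.0])
saliency = np.abs(grad(x))       # SA: |df/dx|
gradient_input = grad(x) * x     # GI: (df/dx) * x, elementwise
```

Because `grad` ignores its input here, `vargrad` returns the zero vector, which is the "variance is null" observation for linear systems.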